Modeling the distribution of high-dimensional data via latent tree graphical models is a prevalent approach in multiple scientific domains. A common task is to infer the underlying tree structure, given only observations of its terminal nodes. Many algorithms for tree recovery are computationally intensive, which limits their applicability to trees of moderate size. For large trees, a common approach, termed divide-and-conquer, is to recover the tree structure in two steps. First, the structure is recovered separately for multiple, possibly random, subsets of the terminal nodes. Second, the resulting subtrees are merged to form a full tree. Here, we develop Spectral Top-Down Recovery (STDR), a deterministic divide-and-conquer approach to infer large latent tree models. Unlike previous methods, STDR partitions the terminal nodes in a non-random way, based on the Fiedler vector of a suitable Laplacian matrix related to the observed nodes. We prove that under certain conditions this partitioning is consistent with the tree structure. This, in turn, leads to a significantly simpler merging procedure for the small subtrees. We prove that STDR is statistically consistent and bound the number of samples required to accurately recover the tree with high probability. Using simulated data from several common tree models in phylogenetics, we demonstrate that STDR has a significant advantage in terms of runtime, with improved or similar accuracy.
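The spectral partitioning step is simple enough to sketch. Below is a minimal illustration in Python (NumPy) of splitting nodes by the sign of the Fiedler vector of a graph Laplacian; the affinity matrix `W` and the unnormalized Laplacian are generic assumptions, not necessarily the specific construction STDR uses.

```python
# A minimal sketch of Fiedler-vector partitioning, assuming a generic
# pairwise affinity matrix W among the observed terminal nodes.
import numpy as np

def fiedler_partition(W):
    """Split nodes into two groups by the sign of the Fiedler vector."""
    d = W.sum(axis=1)
    L = np.diag(d) - W                    # unnormalized graph Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)  # eigenpairs in ascending order
    fiedler = eigvecs[:, 1]               # eigenvector of 2nd-smallest eigenvalue
    return np.where(fiedler >= 0)[0], np.where(fiedler < 0)[0]

# Example: two loosely connected cliques split cleanly along the weak link.
W = np.ones((6, 6)) - np.eye(6)
W[:3, 3:] = W[3:, :3] = 0.01
print(fiedler_partition(W))
```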
Our previous experiments demonstrated that humans and machines seem to employ different approaches to speaker discrimination, especially in the presence of speaking-style variability. The experiments examined read versus conversational speech. Listeners focus on speaker-specific idiosyncrasies while "telling speakers together", and on relative distances in a shared acoustic space when "telling speakers apart". However, automatic speaker verification (ASV) systems use the same loss function irrespective of whether the trials are target or non-target. To improve ASV performance in the presence of style variability, insights learned from human perception are used to design a new training loss function, which we refer to as "CllrCE loss". CllrCE loss uses both speaker-specific idiosyncrasies and relative acoustic distances between speakers to train the ASV system. When using the UCLA speaker variability database, in the x-vector and conditioning setups, CllrCE loss yields significant relative improvements in EER of 1-66%, and in minDCF of 1-31% and 1-56%, respectively, compared to the x-vector baseline. Using the SITW evaluation tasks, which involve different conversational speech tasks, the proposed loss combined with self-attention conditioning yields significant relative improvements of 2-5% in EER and 6-12% in minDCF over the baseline. In the SITW case, the performance improvements were consistent only with conditioning.
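As an illustration of how such a loss might combine the two criteria, here is a hedged sketch pairing the standard Cllr cost (relative distances between target and non-target trials) with cross-entropy speaker classification (speaker-specific idiosyncrasies); the weighting `alpha` and the exact CllrCE formulation are assumptions, not the paper's published recipe.

```python
# A hedged sketch of a Cllr + cross-entropy objective. `scores` are trial
# LLR scores; `trial_labels` mark target (1) vs. non-target (0) trials.
import math
import torch
import torch.nn.functional as F

def cllr(scores, labels):
    """Standard Cllr: log-likelihood-ratio cost over target/non-target trials."""
    tar = scores[labels == 1]
    non = scores[labels == 0]
    c_tar = F.softplus(-tar).mean()   # log(1 + e^{-s}) for target trials
    c_non = F.softplus(non).mean()    # log(1 + e^{s}) for non-target trials
    return (c_tar + c_non) / (2 * math.log(2.0))

def cllr_ce_loss(scores, trial_labels, class_logits, class_labels, alpha=1.0):
    # Cross-entropy keeps speaker-specific idiosyncrasies; Cllr shapes
    # relative distances between target and non-target trials.
    return F.cross_entropy(class_logits, class_labels) + alpha * cllr(scores, trial_labels)
```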
We propose an approach for extracting speaker embeddings that are robust to speaking-style variations in text-independent speaker verification. Typically, speaker embedding extraction involves training a DNN for speaker classification and using the bottleneck features as speaker representations. Such a network has a pooling layer that transforms frame-level features into utterance-level features by computing statistics over all utterance frames, with equal weight. Self-attentive embeddings, however, perform weighted pooling, with weights corresponding to the importance of the frames for the speaker classification task. Entropy can capture acoustic variability caused by speaking-style variations. Hence, an entropy-based variable frame rate vector is proposed as an external conditioning vector for the self-attention layer, providing the network with information that can address style effects. This work explores five different conditioning methods. The best conditioning method, concatenation with gating, provided statistically significant improvements over the x-vector baseline in 12/23 tasks and matched the baseline in 11/23 tasks when using the UCLA speaker variability database. It also significantly outperformed self-attention without conditioning in 9/23 tasks, and was worse in 1/23. The method also showed significant improvements in the multi-speaker scenarios of SITW.
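A minimal sketch of what conditioning a self-attentive pooling layer by gated concatenation could look like (PyTorch); the module name, dimensions, and per-utterance treatment of the conditioning vector are illustrative assumptions, not the paper's exact architecture.

```python
# Sketch: attentive pooling whose frame weights are conditioned on an
# external vector (e.g., an entropy-based variable frame rate summary).
import torch
import torch.nn as nn

class ConditionedAttentivePooling(nn.Module):
    def __init__(self, feat_dim, cond_dim, hidden=128):
        super().__init__()
        self.gate = nn.Linear(cond_dim, cond_dim)   # gates the conditioning vector
        self.score = nn.Sequential(
            nn.Linear(feat_dim + cond_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, 1))

    def forward(self, frames, cond):
        # frames: (batch, T, feat_dim); cond: (batch, cond_dim)
        gated = torch.sigmoid(self.gate(cond)) * cond
        gated = gated.unsqueeze(1).expand(-1, frames.size(1), -1)
        w = torch.softmax(self.score(torch.cat([frames, gated], dim=-1)), dim=1)
        return (w * frames).sum(dim=1)              # utterance-level embedding
```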
Advances in artificial intelligence/machine learning methods provide tools with broad applicability in scientific research. These techniques are being applied across the diversity of nuclear physics research topics, leading to advances that will aid scientific discoveries and societal applications. This review provides a snapshot of nuclear physics research that has been transformed by artificial intelligence and machine learning techniques.
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications cannot or only marginally benefit from this pre-training approach. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distillation targets, losses, input, network regularization, sequential distillation, etc., revealing that: 1) distilling token relations is more effective than CLS-token- and feature-based distillation; 2) using an intermediate layer of the teacher network as the target performs better than using the last layer when the depth of the student mismatches that of the teacher; 3) weak regularization is preferred; etc. With these findings, we achieve significant fine-tuning accuracy improvements over from-scratch MIM pre-training on ImageNet-1K classification, using the ViT-Tiny, ViT-Small, and ViT-Base models, with +4.2%/+2.4%/+1.4% gains, respectively. Our TinyMIM model of base size achieves 52.2 mIoU on ADE20K semantic segmentation, which is +4.1 higher than the MAE baseline. Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, which sets a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way of developing small vision Transformer models: exploring better training methods rather than introducing inductive biases into architectures, as in most previous works. Code is available at https://github.com/OliverRensu/TinyMIM.
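To make the token-relation idea concrete, here is a hedged sketch of a relation distillation term that matches the student's token-to-token attention relations (softmax of QK^T) to the teacher's via KL divergence; the temperature, layer choice, and exact set of relations are assumptions rather than TinyMIM's precise recipe.

```python
# Sketch: distill token relations by softening pre-softmax attention
# logits from teacher and student and minimizing their KL divergence.
import torch
import torch.nn.functional as F

def relation_distill_loss(student_qk, teacher_qk, tau=1.0):
    """qk tensors: (batch, heads, tokens, tokens) pre-softmax attention logits."""
    s = F.log_softmax(student_qk / tau, dim=-1)
    t = F.softmax(teacher_qk / tau, dim=-1)
    # KL over the token-relation distribution; tau**2 rescales gradients
    return F.kl_div(s, t, reduction="batchmean") * tau ** 2
```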
The recent increase in public and academic interest in preserving biodiversity has led to the growth of the field of conservation technology. This field involves designing and constructing tools that utilize technology to aid in the conservation of wildlife. In this article, we will use case studies to demonstrate the importance of designing conservation tools with human-wildlife interaction in mind and provide a framework for creating successful tools. These case studies include a range of complexities, from simple cat collars to machine learning and game theory methodologies. Our goal is to introduce and inform current and future researchers in the field of conservation technology and provide references for educating the next generation of conservation technologists. Conservation technology not only has the potential to benefit biodiversity but also has broader impacts on fields such as sustainability and environmental protection. By using innovative technologies to address conservation challenges, we can find more effective and efficient solutions to protect and preserve our planet's resources.
Different people speak with diverse personalized speaking styles. Although existing one-shot talking head methods have made significant progress in lip sync, natural facial expressions, and stable head motions, they still cannot generate diverse speaking styles in the final talking head videos. To tackle this problem, we propose a one-shot style-controllable talking face generation framework. In a nutshell, we aim to attain a speaking style from an arbitrary reference speaking video and then drive the one-shot portrait to speak with the reference speaking style and another piece of audio. Specifically, we first develop a style encoder to extract dynamic facial motion patterns of a style reference video and then encode them into a style code. Afterward, we introduce a style-controllable decoder to synthesize stylized facial animations from the speech content and style code. In order to integrate the reference speaking style into generated videos, we design a style-aware adaptive transformer, which enables the encoded style code to adjust the weights of the feed-forward layers accordingly. Thanks to the style-aware adaptation mechanism, the reference speaking style can be better embedded into synthesized videos during decoding. Extensive experiments demonstrate that our method is capable of generating talking head videos with diverse speaking styles from only one portrait image and an audio clip while achieving authentic visual effects. Project Page: https://github.com/FuxiVirtualHuman/styletalk.
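One plausible reading of a style-aware adaptive feed-forward layer is a style-conditioned soft mixture over several candidate weight banks, so decoding adapts to the reference speaking style; the sketch below (PyTorch) is an assumption-laden illustration, not StyleTalk's exact module, and `K` and the mixing scheme are invented for clarity.

```python
# Sketch: the style code routes a softmax mixture over K candidate
# feed-forward weight banks, yielding per-sample adapted FFN weights.
import torch
import torch.nn as nn

class StyleAdaptiveFFN(nn.Module):
    def __init__(self, dim, style_dim, hidden=512, K=4):
        super().__init__()
        self.w1 = nn.Parameter(torch.randn(K, dim, hidden) * 0.02)
        self.w2 = nn.Parameter(torch.randn(K, hidden, dim) * 0.02)
        self.router = nn.Linear(style_dim, K)  # style code -> mixture weights

    def forward(self, x, style):
        # x: (batch, T, dim); style: (batch, style_dim)
        a = torch.softmax(self.router(style), dim=-1)   # (batch, K)
        w1 = torch.einsum("bk,kdh->bdh", a, self.w1)    # style-mixed weights
        w2 = torch.einsum("bk,khd->bhd", a, self.w2)
        h = torch.relu(torch.einsum("btd,bdh->bth", x, w1))
        return torch.einsum("bth,bhd->btd", h, w2)
```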
Decompilation aims to transform a low-level programming language (LPL) (e.g., a binary file) into its functionally equivalent high-level programming language (HPL) (e.g., C/C++). It is a core technology in software security, especially in vulnerability discovery and malware analysis. In recent years, with the successful application of neural machine translation (NMT) models in natural language processing (NLP), researchers have tried to build neural decompilers by borrowing the idea of NMT. They formulate the decompilation process as a translation problem between LPL and HPL, aiming to reduce the human cost required to develop decompilation tools and to improve their generalizability. However, state-of-the-art learning-based decompilers do not cope well with compiler-optimized binaries. Since real-world binaries are mostly compiler-optimized, decompilers that do not consider optimized binaries have limited practical significance. In this paper, we propose a novel learning-based approach named NeurDP that targets compiler-optimized binaries. NeurDP uses a graph neural network (GNN) model to convert LPL to an intermediate representation (IR), which bridges the gap between source code and optimized binary. We also design an Optimized Translation Unit (OTU) to split functions into smaller code fragments for better translation performance. Evaluation results on datasets containing various types of statements show that NeurDP can decompile optimized binaries with 45.21% higher accuracy than state-of-the-art neural decompilation frameworks.
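For intuition, a single message-passing layer over an instruction graph (nodes as instructions, edges as control/data flow) might look like the following sketch; NeurDP's actual GNN variant, node features, and IR decoder are not specified here, and the code is illustrative only.

```python
# Sketch: one GNN message-passing layer over a low-level instruction graph.
import torch
import torch.nn as nn

class InstructionGraphLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(dim, dim)
        self.upd = nn.GRUCell(dim, dim)

    def forward(self, h, adj):
        # h: (nodes, dim) instruction embeddings; adj: (nodes, nodes) 0/1 edges
        m = adj @ self.msg(h)   # aggregate messages from neighboring instructions
        return self.upd(m, h)   # gated update of per-instruction states
```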
Driven by improved architectures and better representation learning frameworks, the field of visual recognition has enjoyed rapid modernization and performance boost in the early 2020s. For example, modern ConvNets, represented by ConvNeXt, have demonstrated strong performance in various scenarios. While these models were originally designed for supervised learning with ImageNet labels, they can also potentially benefit from self-supervised learning techniques such as masked autoencoders (MAE). However, we found that simply combining these two approaches leads to subpar performance. In this paper, we propose a fully convolutional masked autoencoder framework and a new Global Response Normalization (GRN) layer that can be added to the ConvNeXt architecture to enhance inter-channel feature competition. This co-design of self-supervised learning techniques and architectural improvement results in a new model family called ConvNeXt V2, which significantly improves the performance of pure ConvNets on various recognition benchmarks, including ImageNet classification, COCO detection, and ADE20K segmentation. We also provide pre-trained ConvNeXt V2 models of various sizes, ranging from an efficient 3.7M-parameter Atto model with 76.7% top-1 accuracy on ImageNet, to a 650M Huge model that achieves a state-of-the-art 88.9% accuracy using only public training data.
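Following the paper's description, GRN aggregates a global per-channel statistic, normalizes it across channels, and uses the result to recalibrate the features, creating inter-channel competition; below is a sketch in PyTorch (the channels-last layout, zero initialization, and epsilon are assumptions consistent with that description).

```python
# Sketch of a Global Response Normalization (GRN) layer for
# channels-last feature maps of shape (N, H, W, C).
import torch
import torch.nn as nn

class GRN(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1, 1, 1, dim))
        self.beta = nn.Parameter(torch.zeros(1, 1, 1, dim))

    def forward(self, x):
        gx = torch.norm(x, p=2, dim=(1, 2), keepdim=True)       # global per-channel L2 norm
        nx = gx / (gx.mean(dim=-1, keepdim=True) + 1e-6)        # normalize across channels
        return self.gamma * (x * nx) + self.beta + x            # recalibrate, keep residual
```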
In this paper, we propose a novel framework dubbed peer learning to deal with the problem of biased scene graph generation (SGG). This framework uses predicate sampling and consensus voting (PSCV) to encourage different peers to learn from each other, improving model diversity and mitigating bias in SGG. To address the heavily long-tailed distribution of predicate classes, we propose to use predicate sampling to divide and conquer this issue. As a result, the model is less biased and makes more balanced predicate predictions. Specifically, one peer may not be sufficiently diverse to discriminate between different levels of predicate distributions. Therefore, we sample the data distribution based on frequency of predicates into sub-distributions, selecting head, body, and tail classes to combine and feed to different peers as complementary predicate knowledge during the training process. The complementary predicate knowledge of these peers is then ensembled utilizing a consensus voting strategy, which simulates a civilized voting process in our society that emphasizes the majority opinion and diminishes the minority opinion. This approach ensures that the learned representations of each peer are optimally adapted to the various data distributions. Extensive experiments on the Visual Genome dataset demonstrate that PSCV outperforms previous methods. We have established a new state-of-the-art (SOTA) on the SGCls task by achieving a mean of 31.6.
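As a rough illustration of consensus voting over peers, the sketch below takes a majority vote over per-peer predicate predictions and keeps averaged soft scores as a consensus signal; PSCV's actual voting and ensembling rules may differ from this assumption.

```python
# Sketch: consensus voting across peers that each emit predicate logits.
import torch

def consensus_vote(peer_logits):
    """peer_logits: (num_peers, batch, num_predicates)."""
    votes = peer_logits.argmax(dim=-1)                  # (peers, batch) hard votes
    majority = votes.mode(dim=0).values                 # majority predicate per sample
    probs = torch.softmax(peer_logits, dim=-1).mean(0)  # averaged soft consensus scores
    return majority, probs
```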